
skip unitest #60

Closed

meichangsu1 wants to merge 17 commits into dev from seq_unitest_ljl_fix

Conversation

@meichangsu1
Collaborator

No description provided.

Replace calls to `_get_sp_group_from_device_mesh` with direct access to `sequence_parallel._sp_group` in sequence parallel attention tests. This simplifies the test setup by using the already initialized group stored in the module, improving code clarity and reducing redundancy.
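A minimal sketch of what this looks like in a test, assuming `sequence_parallel` is the strategy module from the diff and `_sp_group` holds the process group initialized during setup (the surrounding test scaffolding is hypothetical):

```python
import torch.distributed as dist

from twinkle.model.transformers.strategy import sequence_parallel

# Before: group = _get_sp_group_from_device_mesh(device_mesh)
# After: reuse the group the strategy module already initialized.
group = sequence_parallel._sp_group
sp_rank = dist.get_rank(group=group)
sp_world_size = dist.get_world_size(group=group)
```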

Replace manual gradient handling with `torch.autograd.Function` subclasses `_ReduceSequenceParallelLoss` and `_ReduceSequenceParallelSum` to compute global loss via autograd-aware all-reduce. This simplifies the logic for both sum and mean reductions, improves gradient correctness, and removes the need for separate metric scaling when `world_size > 1`.
Add `compensate_fsdp_avg` config flag to adjust loss reduction when sequence parallel (SP) is combined with FSDP or accelerate DDP/FSDP. This prevents gradient magnitude from being incorrectly scaled down by an extra factor of SP world size during data-parallel averaging.

- In `GatherLoss` backward, scale gradients by SP world size before splitting, so downstream FSDP averaging does not shrink this path.
- In `SequenceParallelStrategy.reduce_loss`, apply a compensation factor (ulysses_size) when `compensate_fsdp_avg` is enabled.
- Automatically set `compensate_fsdp_avg=True` in `TransformersModel` when using NativeFSDPStrategy or AccelerateStrategy with both SP and data parallelism active.
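A back-of-the-envelope check of the scaling described above, with illustrative numbers only (not library code):

```python
# Why the extra factor of sp_size: GatherLoss's backward splits the gradient,
# so each SP rank holds a 1/sp_size shard, and the shards sum back to the full
# gradient g within one data-parallel group. FSDP/DDP then averages over all
# dp_size * sp_size ranks, which divides the reconstructed g by an extra
# sp_size unless the shards are pre-scaled.
sp_size, dp_size = 4, 2
g = 1.0                                                  # full gradient of one DP group
unscaled = dp_size * g / (dp_size * sp_size)             # = g / sp_size: too small
scaled = dp_size * (g * sp_size) / (dp_size * sp_size)   # = g: correct
assert scaled == g
```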
- Add 'kernels' as an optional dependency group in pyproject.toml
- Refactor CI container test script to use a reusable installation function
- Install twinkle with kernels in both debug and release modes for consistency
- Improve maintainability by centralizing the installation command
Update `_load_from_hub` function to handle API changes in `select_revision_or_version` and `get_kernel` calls. The changes introduce try-except blocks to catch `TypeError` exceptions, allowing the function to work with both modern keyword-based APIs and older positional argument variants. This ensures compatibility across different versions of the kernels module without breaking existing functionality.
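A sketch of the compatibility pattern, with hypothetical keyword names; the real signatures differ across `kernels` releases, which is exactly why the `TypeError` fallback exists:

```python
def _load_from_hub_compat(repo_id: str, revision: str, version: str):
    # Hypothetical shim: prefer the modern keyword-based API, fall back to
    # the older positional variant when the signature does not match.
    try:
        resolved = select_revision_or_version(repo_id, revision=revision, version=version)
    except TypeError:
        resolved = select_revision_or_version(repo_id, revision, version)
    try:
        return get_kernel(repo_id, revision=resolved)
    except TypeError:
        return get_kernel(repo_id, resolved)
```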
Add additional import statements and validation steps to ensure the required kernel 'kernels-test/flattened-build' can be successfully loaded before proceeding with the test. This prevents test failures due to missing or incompatible kernel environments and provides clearer skip messages when the kernel is unavailable.
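A minimal version of such a guard, assuming `unittest` and loading via the `kernels` package's `get_kernel` (the actual test may route through twinkle's own loader):

```python
import unittest

from kernels import get_kernel


class TestFunctionKernel(unittest.TestCase):

    def test_flattened_build_replaces_function(self):
        # Skip with a clear message when the kernel environment is missing
        # or incompatible, instead of failing the test outright.
        try:
            kernel = get_kernel('kernels-test/flattened-build')
        except Exception as exc:
            self.skipTest(f"kernel 'kernels-test/flattened-build' unavailable: {exc}")
        # ... actual assertions against `kernel` would follow here ...
```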
@gemini-code-assist
Contributor

Summary of Changes

Hello @meichangsu1, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the twinkle library by improving its integration with the kernels package and refining its sequence parallel training capabilities. It ensures robustness against kernels API changes and provides a more accurate and compatible loss reduction mechanism for sequence parallelism, particularly when used in conjunction with FSDP. These changes aim to make the library more stable and performant in distributed training environments.

Highlights

  • Kernel API Compatibility: Implemented backward compatibility for the kernels package API, specifically for select_revision_or_version and get_kernel functions, to handle variations in argument signatures across different kernels versions.
  • Sequence Parallelism Loss Reduction: Refactored the reduce_loss mechanism within the SequenceParallelStrategy to use custom torch.autograd.Function classes, ensuring correct global loss calculation and gradient propagation, especially when combined with FSDP.
  • FSDP Compensation in Sequence Parallelism: Introduced a compensate_fsdp_avg flag and logic to scale gradients in the backward pass of sequence parallelism, preventing incorrect shrinkage when data-parallel gradient averaging (e.g., FSDP) is also active.
  • CI/CD Improvements: Added a dedicated function install_twinkle_with_kernels to the CI script for consistent installation of twinkle with kernel dependencies, including a fallback to a specific PyPI mirror.
  • Dependency Management: Added kernels as an optional dependency in pyproject.toml to explicitly define its requirement for kernel-related functionalities.


Changelog
  • .dev_scripts/ci_container_test.sh
    • Added install_twinkle_with_kernels function to streamline installation of twinkle with kernel dependencies.
    • Updated CI script to utilize the new install_twinkle_with_kernels function in both debug and release environments.
  • pyproject.toml
    • Added kernels as an optional dependency under the kernels extra.
  • src/twinkle/kernel/function.py
    • Modified select_revision_or_version and get_kernel calls to use try-except TypeError blocks, supporting both keyword and positional arguments for API compatibility.
  • src/twinkle/model/transformers/strategy/sequence_parallel.py
    • Added gradient scaling in the backward pass for sequence parallelism when world_size > 1 to compensate for downstream FSDP averaging.
    • Corrected the indexing logic for prev in pad_and_split_inputs to mask the correct token.
    • Introduced compensate_fsdp_avg field to SequenceParallelConfig.
    • Refactored reduce_loss to use nested _ReduceSequenceParallelLoss and _ReduceSequenceParallelSum torch.autograd.Function classes for autograd-aware global loss reduction.
  • src/twinkle/model/transformers/transformers.py
    • Implemented logic in _ensure_sp_strategy to conditionally set compensate_fsdp_avg in sp_config based on the strategy type (FSDP/Accelerate) and device mesh configuration.
    • Adjusted the assignment of loss_reduction to sp_config to always reflect the loss instance's reduction attribute.
  • tests/kernel/test_function_kernel.py
    • Added a skipTest condition to test_flattened_build_replaces_function to ensure the kernels-test/flattened-build can be successfully loaded before running the test.
Activity
  • The pull request title "skip unitest" is brief and does not fully convey the scope of the changes.
  • The pull request description is empty, providing no initial context from the author.
  • The changes involve significant refactoring and additions across multiple core files, indicating a focus on improving library functionality and compatibility rather than merely skipping tests.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces several important changes, primarily focused on improving sequence parallel training. Key changes include adding kernels as an optional dependency, ensuring backward compatibility with different kernels library API versions, and implementing significant correctness fixes for loss calculation and gradient scaling in sequence parallelism, especially when combined with data parallelism. A bug in label masking for packed sequences is also fixed. My review includes a suggestion to improve code structure by refactoring nested classes for better maintainability. The pull request title 'skip unitest' is not very descriptive of these substantial changes; a more informative title would be helpful.

Comment on lines +986 to +1028
class _ReduceSequenceParallelLoss(torch.autograd.Function):

    @staticmethod
    def forward(ctx, local_mean: torch.Tensor, num_valid_tokens: torch.Tensor) -> torch.Tensor:
        local_tokens = num_valid_tokens.detach().clone()
        local_sum = local_mean * local_tokens
        if local_tokens.item() == 0:
            local_sum = torch.nan_to_num(local_sum)
        global_sum = local_sum.detach().clone()
        dist.all_reduce(global_sum, group=sequence_parallel._sp_group)
        global_tokens = num_valid_tokens.detach().clone()
        dist.all_reduce(global_tokens, group=sequence_parallel._sp_group)
        ctx.save_for_backward(local_tokens, global_tokens)
        if global_tokens.item() == 0:
            return local_sum
        return global_sum / global_tokens

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        local_tokens, global_tokens = ctx.saved_tensors
        if global_tokens.item() == 0:
            return torch.zeros_like(grad_output), None
        # d(global_mean)/d(local_mean) = local_tokens / global_tokens.
        grad_local_mean = grad_output * (local_tokens / global_tokens) * compensate_factor
        return grad_local_mean, None


class _ReduceSequenceParallelSum(torch.autograd.Function):

    @staticmethod
    def forward(ctx, local_sum: torch.Tensor) -> torch.Tensor:
        ctx.sum_metric_scale = sum_metric_scale
        global_sum = local_sum.detach().clone()
        dist.all_reduce(global_sum, group=sequence_parallel._sp_group)
        # Keep logging/metric value aligned with non-SP sum semantics under
        # outer collect='mean' by removing one SP replication factor.
        return global_sum / ctx.sum_metric_scale

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        # Keep training gradient scale unchanged; forward-side scaling is for
        # logging/metric alignment under outer collect='mean'.
        return grad_output

gemini-code-assist (Contributor), severity: medium

For better code structure and maintainability, consider moving the nested classes _ReduceSequenceParallelLoss and _ReduceSequenceParallelSum to the module level. Defining them outside the reduce_loss method would make them reusable, easier to test in isolation, and improve the readability of the reduce_loss method itself. You would need to pass context variables like compensate_factor, sum_metric_scale, and sequence_parallel._sp_group as arguments to the apply method.
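One way the suggestion could look, sketched under the assumption that `compensate_factor` and the process group can be threaded through `apply`; the forward/backward bodies are abridged from the diff above, with the zero-token guards omitted for brevity:

```python
class _ReduceSequenceParallelLoss(torch.autograd.Function):
    """Module-level variant: context values arrive as forward() arguments."""

    @staticmethod
    def forward(ctx, local_mean, num_valid_tokens, group, compensate_factor):
        local_tokens = num_valid_tokens.detach().clone()
        global_sum = (local_mean * local_tokens).detach().clone()
        dist.all_reduce(global_sum, group=group)
        global_tokens = num_valid_tokens.detach().clone()
        dist.all_reduce(global_tokens, group=group)
        ctx.save_for_backward(local_tokens, global_tokens)
        ctx.compensate_factor = compensate_factor
        return global_sum / global_tokens

    @staticmethod
    def backward(ctx, grad_output):
        local_tokens, global_tokens = ctx.saved_tensors
        grad = grad_output * (local_tokens / global_tokens) * ctx.compensate_factor
        # Non-tensor forward arguments (group, compensate_factor) get None grads.
        return grad, None, None, None


# Usage inside reduce_loss:
# loss = _ReduceSequenceParallelLoss.apply(
#     local_mean, num_valid_tokens, sequence_parallel._sp_group, compensate_factor)
```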

@meichangsu1 deleted the seq_unitest_ljl_fix branch on February 13, 2026 at 09:05
